admission officer
Causal Perception
Alvarez, Jose M., Ruggieri, Salvatore
Perception occurs when two individuals interpret the same information differently. Although it is a known phenomenon with implications for bias in decision-making, since an individual's experience determines their interpretation, perception remains largely overlooked in automated decision-making (ADM) systems. In particular, it can considerably affect the fairness, or fair usage, of an ADM system, as fairness itself is context-specific and its interpretation depends on who is judging. In this work, we formalize perception under causal reasoning to capture the act of interpretation by an individual. We also formalize individual experience as additional causal knowledge that comes with and is used by an individual. Further, we define and discuss loaded attributes, which are attributes prone to evoke perception. Sensitive attributes, such as gender and race, are clear examples of loaded attributes. We define two kinds of causal perception, unfaithful and inconsistent, based on the causal properties of faithfulness and consistency. We illustrate our framework through a series of decision-making examples and discuss relevant fairness applications. The goal of this work is to position perception as a parameter of interest, useful for extending the standard, single-interpretation ADM problem formulation.
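The abstract's core idea, that two individuals interpret the same information differently because each carries their own causal knowledge, can be sketched with a toy model. This is an illustration, not the paper's formalism: the graph structure, the `with_experience` helper, and the admission scenario are all hypothetical.

```python
# Toy sketch of perception as interpretation under individual causal
# knowledge. Two "judges" share a base causal graph, but one judge's
# experience adds an extra edge, so the same loaded attribute (gender)
# is interpreted differently by each.

# Causal graph as adjacency sets: cause -> set of direct effects.
BASE_GRAPH = {
    "test_score": {"admission"},
}

def with_experience(graph, extra_edges):
    """Return a copy of the graph augmented with an individual's
    additional causal knowledge (their 'experience')."""
    g = {cause: set(effects) for cause, effects in graph.items()}
    for cause, effect in extra_edges:
        g.setdefault(cause, set()).add(effect)
    return g

def perceives_influence(graph, source, target):
    """True if the individual's graph contains a directed path from
    source to target, i.e. they interpret `source` as relevant to
    `target`."""
    stack, seen = [source], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

# Judge A uses only the shared graph; judge B's experience adds a
# direct gender -> admission edge.
judge_a = BASE_GRAPH
judge_b = with_experience(BASE_GRAPH, [("gender", "admission")])
```

Under this sketch, `perceives_influence(judge_a, "gender", "admission")` is false while the same query against `judge_b` is true: the same attribute evokes two interpretations, which is the phenomenon the paper sets out to formalize.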
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Italy > Tuscany > Pisa Province > Pisa (0.04)
- Asia > Middle East > Jordan (0.04)
- (2 more...)
- Government (0.68)
- Education > Educational Setting (0.46)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.68)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.46)
'A real opportunity': how ChatGPT could help college applicants
Chatter about artificial intelligence mostly falls into three basic categories, beginning with anxious uncertainty (will it take our jobs?). In this hazy, liminal, pre-disruption moment, there is little consensus as to whether generative AI is a tool or a threat, and few rules for using it properly. For students, this uncertainty feels especially profound. Bans on AI and claims that using it constitutes cheating are now giving way to concerns that AI use is inevitable and probably should be taught in school. Now, as a new college admissions season kicks into gear, many prospective applicants are wondering: can AI write my personal essay?
- North America > United States > New York (0.05)
- North America > United States > Michigan (0.05)
- North America > United States > Kentucky > Jefferson County > Louisville (0.05)
- (5 more...)
- Law (1.00)
- Education > Educational Setting > Higher Education (0.93)
- Government > Regional Government > North America Government > United States Government (0.48)
The Supreme Court Killed the College-Admissions Essay
Nestled within yesterday's Supreme Court decision declaring that race-conscious admissions programs, like those at Harvard and the University of North Carolina, are unconstitutional is a crucial carveout: Colleges are free to consider "an applicant's discussion of how race affected his or her life." In other words, they can weigh a candidate's race when it is mentioned in an admissions essay. Observers had already speculated about personal essays becoming invaluable tools for candidates who want to express their racial background without checking a box--now it is clear that the end of affirmative action will transform not only how colleges select students, but also how teenagers advertise themselves to colleges. For essays and statements to provide a workaround for pursuing diversity, applicants must first cast themselves as diverse. The American Council on Education, a nonprofit focused on the impacts of public policy on higher education, recently convened a panel dedicated to planning for the demise of affirmative action; admissions directors and consultants emphasized the need "to educate students about how to write about who they are in a very different way," expressing their "full authentic story" and "trials and tribulations."
- Law (1.00)
- Education > Educational Setting > Higher Education (1.00)
ChatGPT is not the end of written integrity - The Georgetown Voice
When the first capable version of ChatGPT was released in November 2022, professors across the internet bemoaned the death of the undergraduate essay as a method to assess students. The Atlantic called the moment a "textpocalypse" and a writer from The New York Times said he was "deeply unsettled" following a conversation with Bing's integrated AI chatbot. ChatGPT, unlike earlier chatbots, has the capacity to generate coherent, long-form writing. ChatGPT has upended what it means to write. But, upon further analysis, it may not be the game-changer for writing or other industries that the world initially envisioned.
AI Machine-Learning: In Bias We Trust?
MIT researchers find that the explanation methods designed to help users determine whether to trust a machine-learning model's predictions can perpetuate biases and lead to worse outcomes for people from disadvantaged groups. According to a new study, explanation methods that help users determine whether to trust machine-learning model predictions can be less accurate for disadvantaged subgroups. Machine-learning algorithms are sometimes employed to assist human decision-makers when the stakes are high. For example, a model may predict which law school candidates are most likely to pass the bar exam, assisting admissions officers in deciding which students to admit. Because of the complexity of these models, often having millions of parameters, it is nearly impossible for AI researchers to fully understand how they make predictions.
- Education > Educational Setting > Higher Education (0.74)
- Education > Curriculum > Subject-Specific Education (0.58)
In bias we trust?
When the stakes are high, machine-learning models are sometimes used to aid human decision-makers. For instance, a model could predict which law school applicants are most likely to pass the bar exam to help an admissions officer determine which students should be accepted. These models often have millions of parameters, so how they make predictions is nearly impossible for researchers to fully understand, let alone an admissions officer with no machine-learning experience. Researchers sometimes employ explanation methods that mimic a larger model by creating simple approximations of its predictions. These approximations, which are far easier to understand, help users determine whether to trust the model's predictions.
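The surrogate idea described above, approximating a complex model with a simpler one that a person can inspect, can be sketched generically. This is not MIT's method; it is a minimal local linear surrogate in the spirit of methods like LIME, with a made-up `black_box` scoring function standing in for the bar-exam predictor.

```python
# Minimal local-surrogate sketch: sample points around one applicant,
# query the black-box model, and fit a linear approximation whose
# per-feature weights a non-expert (e.g. an admissions officer) can read.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for a complex model scoring bar-exam passage probability;
    # note the interaction term, which a global linear model would miss.
    z = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
    return 1.0 / (1.0 + np.exp(-z))

def local_surrogate(x0, n_samples=500, scale=0.1):
    """Fit a linear approximation of black_box in a small neighborhood
    of the point x0; return the per-feature weights."""
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = black_box(X)
    # Least-squares linear fit with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # drop the intercept, keep feature weights

x0 = np.array([0.2, -0.4])        # one (hypothetical) applicant
weights = local_surrogate(x0)      # e.g. weights[0] > 0, weights[1] < 0
```

The weights summarize how the black box behaves near this one applicant only; as the article's researchers caution, such approximations can be systematically less faithful for some subgroups than for others.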
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.40)
- North America > Canada > Ontario > Toronto (0.15)
Experts see new roles for artificial intelligence in college admissions process
This story is from The Hill's Changing America publication. The job of a college admissions officer is not an easy one. At any competitive institution of higher learning, the admissions process used to hand-pick each incoming student has also come under increasing scrutiny in recent years. To ensure the ongoing success of an institution, admissions officers are charged with the nearly impossible task of efficiently evaluating thousands of applications each school year, with the expectation that their choices will reflect the institution's standards, grow diversity and inspire enough students to enroll. The process is a balancing act, and one that is expected to proceed without gender-based or racial bias.
Never mind the Elon--the forecast isn't that spooky for AI in business
Despite Elon Musk's warnings this summer, there's not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only "Apocalypse" on the horizon right now is an overreliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel. Examples of what currently pass for "Artificial Intelligence"--technologies such as expert systems and machine learning--are excellent for creating software that can help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types have been around for decades.
- North America > United States > Arizona > Maricopa County > Phoenix (0.05)
- Asia > Singapore (0.05)
- Information Technology (0.71)
- Health & Medicine > Therapeutic Area (0.49)